Statistical Network Analysis for Functional MRI: Summary Networks and Group Comparisons
Comparing networks in neuroscience is hard, because the topological
properties of a given network necessarily depend on the number of edges
of that network. This problem arises in the analysis of both weighted and
unweighted networks. The term density is often used in this context to
refer to the mean edge weight of a weighted network, or to the number of
edges in an unweighted one. Comparing families of networks is therefore
statistically difficult because differences in topology are necessarily
associated with differences in density. In this review paper, we consider this
problem from two different perspectives, which include (i) the construction of
summary networks, such as how to compute and visualize the mean network from a
sample of network-valued data points; and (ii) how to test for topological
differences, when two families of networks also exhibit significant differences
in density. In the first instance, we show that a family of networks can
be summarized by adopting a mass-univariate approach,
which produces a statistical parametric network (SPN). In the second part of
this review, we then highlight the inherent problems associated with the
comparison of topological functions of families of networks that differ in
density. In particular, we show that a wide range of topological summaries,
such as global efficiency and network modularity, are highly sensitive to
differences in density. Moreover, these problems are not restricted to
unweighted metrics, as we demonstrate that the same issues remain present when
considering the weighted versions of these metrics. We conclude by encouraging
caution, when reporting such statistical comparisons, and by emphasizing the
importance of constructing summary networks.Comment: 16 pages, 5 figure
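The density sensitivity described above is easy to reproduce. The following sketch (an illustrative assumption, not the paper's own code) uses networkx to compute global efficiency for Erdős-Rényi graphs that differ only in edge probability, i.e. in density:

```python
# Hypothetical sketch: global efficiency tracks density, not just topology.
# We compare Erdos-Renyi graphs whose only difference is the edge
# probability (a proxy for density); the generative model is otherwise fixed.
import networkx as nx

densities = [0.1, 0.3, 0.5]
effs = []
for p in densities:
    G = nx.erdos_renyi_graph(n=60, p=p, seed=0)
    effs.append(nx.global_efficiency(G))

# Efficiency rises with density even though nothing else about the
# generative topology has changed.
print(effs)
```

The same experiment with modularity, or with weighted variants of these metrics, shows the same confound, which is why the abstract urges caution when comparing groups that differ in density.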
Weighted Frechet Means as Convex Combinations in Metric Spaces: Properties and Generalized Median Inequalities
In this short note, we study the properties of the weighted Frechet mean as a
convex combination operator on an arbitrary metric space, (Y,d). We show that
this binary operator is commutative, non-associative, idempotent, invariant to
multiplication by a constant weight and possesses an identity element. We also
treat the properties of the weighted cumulative Frechet mean. These tools allow
us to derive several types of median inequalities for abstract metric spaces
that hold for both negative and positive Alexandrov spaces. In particular, we
show through an example that these bounds cannot be improved upon in general
metric spaces. For weighted Frechet means, however, such inequalities can
only be derived for weights greater than or equal to one. This latter limitation
highlights the inherent difficulties associated with working with
abstract-valued random variables.
Comment: 7 pages, 1 figure. Submitted to Probability and Statistics Letters
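On the simplest metric space, the real line with the absolute-value metric, the weighted Fréchet mean of two points has a closed form, and the algebraic properties listed in the abstract can be checked directly. This is an illustrative sketch on (R, |.|) only, not the general metric-space construction:

```python
# Hypothetical sketch on the metric space (R, |.|): the weighted Frechet
# mean  argmin_x  w1*d(x,a)^2 + w2*d(x,b)^2  has the closed form
# (w1*a + w2*b) / (w1 + w2).  We verify the properties of this binary
# operator stated in the abstract.
def frechet2(a, w1, b, w2):
    return (w1 * a + w2 * b) / (w1 + w2)

a, b = 1.0, 5.0
# commutative
assert frechet2(a, 0.25, b, 0.75) == frechet2(b, 0.75, a, 0.25)
# idempotent
assert frechet2(a, 0.25, a, 0.75) == a
# invariant to multiplying both weights by a constant
assert frechet2(a, 0.25, b, 0.75) == frechet2(a, 1.0, b, 3.0)
# non-associative in general: nesting changes the effective weights
x = frechet2(frechet2(a, 1, b, 1), 1, b, 1)   # ((a+b)/2 + b)/2
y = frechet2(a, 1, frechet2(b, 1, b, 1), 1)   # (a+b)/2
assert x != y
print(x, y)
```

In a general metric space the minimizer has no closed form and need not be unique, which is part of why the median inequalities in the note require care.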
Hypothesis Testing For Network Data in Functional Neuroimaging
In recent years, it has become common practice in neuroscience to use
networks to summarize relational information in a set of measurements,
typically assumed to be reflective of either functional or structural
relationships between regions of interest in the brain. One of the most basic
tasks of interest in the analysis of such data is the testing of hypotheses, in
answer to questions such as "Is there a difference between the networks of
these two groups of subjects?" In the classical setting, where the unit of
interest is a scalar or a vector, such questions are answered through the use
of familiar two-sample testing strategies. Networks, however, are not Euclidean
objects, and hence classical methods do not directly apply. We address this
challenge by drawing on concepts and techniques from geometry, and
high-dimensional statistical inference. Our work is based on a precise
geometric characterization of the space of graph Laplacian matrices and a
nonparametric notion of averaging due to Fréchet. We motivate and illustrate
our resulting methodologies for testing in the context of networks derived from
functional neuroimaging data on human subjects from the 1000 Functional
Connectomes Project. In particular, we show that this global test is
statistically more powerful than a mass-univariate approach. In addition, we
provide a method for visualizing the contribution of each individual edge
to the overall test statistic.
Comment: 34 pages, 5 figures
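The overall shape of such a global two-sample test can be sketched as follows. This is a simplified stand-in under stated assumptions: networks are represented by their graph Laplacians, group averages are plain Euclidean (Frobenius) Fréchet means, and the null is calibrated by permutation; the paper's actual geometric characterization and inference are more refined, and the data below are simulated, not from the 1000 Functional Connectomes Project:

```python
# Hypothetical sketch of a global two-sample test on networks: represent
# each network by its graph Laplacian, take the Euclidean Frechet mean of
# each group, and use the squared Frobenius distance between the two mean
# Laplacians as the test statistic, calibrated by a permutation null.
import numpy as np

rng = np.random.default_rng(0)

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def random_network(n, base):
    W = base + 0.1 * rng.standard_normal((n, n))
    W = np.abs(W + W.T) / 2          # symmetric, nonnegative weights
    np.fill_diagonal(W, 0.0)
    return laplacian(W)

n = 8
group_a = [random_network(n, np.full((n, n), 0.5)) for _ in range(15)]
group_b = [random_network(n, np.full((n, n), 0.8)) for _ in range(15)]

def stat(xs, ys):
    # Entrywise squared differences also give per-edge contributions,
    # which can be visualized as in the paper.
    return np.sum((np.mean(xs, axis=0) - np.mean(ys, axis=0)) ** 2)

obs = stat(group_a, group_b)
pooled = group_a + group_b
perms, count = 500, 0
for _ in range(perms):
    idx = rng.permutation(len(pooled))
    xs = [pooled[i] for i in idx[:15]]
    ys = [pooled[i] for i in idx[15:]]
    if stat(xs, ys) >= obs:
        count += 1
p_value = (count + 1) / (perms + 1)
print(p_value)
```

Because the two groups genuinely differ in mean connectivity, the observed statistic should exceed essentially all permuted ones, yielding a small p-value.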
Classification Loss Function for Parameter Ensembles in Bayesian Hierarchical Models
Parameter ensembles or sets of point estimates constitute one of the
cornerstones of modern statistical practice. This is especially the case in
Bayesian hierarchical models, where different decision-theoretic frameworks can
be deployed to summarize such parameter ensembles. The estimation of these
parameter ensembles may thus substantially vary depending on which inferential
goals are prioritised by the modeller. In this note, we consider the problem of
classifying the elements of a parameter ensemble above or below a given
threshold. Two threshold classification losses (TCLs), weighted and
unweighted, are formulated. The weighted TCL can be used to emphasize the
estimation of false positives over false negatives or the converse. We prove
that the weighted and unweighted TCLs are optimized by the ensembles of
unit-specific posterior quantiles and posterior medians, respectively. In
addition, we relate these classification loss functions on parameter ensembles
to the concepts of posterior sensitivity and specificity. Finally, we find some
relationships between the unweighted TCL and the absolute value loss, which
explain why both functions are minimized by posterior medians.
Comment: Submitted to Probability and Statistics Letters
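For a single ensemble element, the optimality of the posterior median under the unweighted TCL can be illustrated numerically. This sketch is an assumption-laden toy (a Gaussian posterior, an arbitrary threshold, and hypothetical comparison estimates), not the paper's derivation:

```python
# Hypothetical sketch for one ensemble element: the posterior expected
# unweighted threshold classification loss of a point estimate delta is
#   P(theta > t) * 1{delta <= t}  +  P(theta <= t) * 1{delta > t},
# so the optimal estimate lies on the side of t carrying posterior mass
# greater than 1/2 -- which the posterior median always does.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(loc=1.2, scale=1.0, size=20000)   # posterior draws
t = 1.0                                              # threshold

p_above = np.mean(theta > t)

def expected_tcl(delta):
    return p_above if delta <= t else 1.0 - p_above

median = np.median(theta)
# Compare the median against two arbitrary alternatives.
losses = {d: expected_tcl(d) for d in (0.0, median, 3.0)}
print(losses)
```

Replacing the median by a posterior quantile chosen to match the weight ratio gives the corresponding optimality for the weighted TCL, mirroring the trade-off between posterior sensitivity and specificity discussed in the note.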